This project focuses on designing and optimizing a convolutional neural network (CNN) that classifies political memes as conservative or liberal, aiming for the highest possible accuracy on a validation set. The process employs techniques such as early stopping and model checkpointing to monitor performance, and adjusts the CNN architecture based on how accuracy responds to hyperparameters such as the number of filters and layers. The task emphasizes reproducibility through a pre-determined data split and uses separate data generators for training, validation, and testing. The project's success is ultimately evaluated on an independent test set, with learning curves and detailed analysis included in the final report.
For this project, the image dataset was sourced by exploring meme-focused pages on social platforms such as Reddit, Facebook, and Pinterest. A total of 1,000 images were collected, with an equal distribution between conservative and liberal political themes.
Special thanks to Kate Arendes for contributions to the collection process.
import PIL
import numpy as np
from PIL import Image
from keras import layers
from tensorflow import keras
from keras import regularizers
from google.colab import drive
import matplotlib.pyplot as plt
from keras.metrics import Precision
from keras.preprocessing.image import ImageDataGenerator
First, let's mount Google Drive so we can load the images.
# Let's mount the drive to load the images
drive.mount('/content/drive')
Mounted at /content/drive
# Let's set the base directory for loading the political meme images
base_directory = "/content/drive/My Drive/Political Meme Dataset/"
# Let's initialize the ImageDataGenerator with rescaling to normalize pixel values
my_generator = ImageDataGenerator(rescale=1./255)
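The `rescale=1./255` argument multiplies every 8-bit pixel value by 1/255, mapping the raw range [0, 255] onto [0.0, 1.0] before the images reach the network. A minimal pure-Python sketch of that normalization (the helper name `rescale_pixels` is ours, for illustration only):

```python
def rescale_pixels(pixels, factor=1.0 / 255):
    """Apply the same per-pixel rescaling as ImageDataGenerator(rescale=1./255)."""
    return [p * factor for p in pixels]

print(rescale_pixels([0, 127, 255]))  # [0.0, ~0.498, 1.0]
```

Normalizing inputs to a small, consistent range helps gradient-based training converge.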
# Let's set up the training data generator
# This loads images of size 150x150, in batches of 4, with binary class labels
train_generator = my_generator.flow_from_directory(
    f"{base_directory}/training/",
    target_size=(150, 150),
    batch_size=4,
    class_mode='binary'
)
# Let's set up the validation data generator
# Loads images of the same size and batch size as the training generator
valid_generator = my_generator.flow_from_directory(
    f"{base_directory}/validation/",
    target_size=(150, 150),
    batch_size=4,
    class_mode='binary'
)
# Let's set up the test data generator
# Uses the same parameters for consistency across training, validation, and testing
test_generator = my_generator.flow_from_directory(
    f"{base_directory}/test/",
    target_size=(150, 150),
    batch_size=4,
    class_mode='binary'
)
Found 600 images belonging to 2 classes.
Found 200 images belonging to 2 classes.
Found 200 images belonging to 2 classes.
# Let's load a single image using the PIL library.
image = Image.open(f"{base_directory}/training/train_liberal/0f76446d7d65a9e6508a226ae33e8a51--felder-donald-oconnor.jpg")
# Let's get some details about the image.
print("Image Mode -->", image.mode)
print("Image Format --> ", image.format)
print("Image Size -->", image.size)
Image Mode --> RGB
Image Format --> JPEG
Image Size --> (118, 108)
# Let's display the colored image
plt.imshow(np.asarray(image))
plt.colorbar()
<matplotlib.colorbar.Colorbar at 0x7c63946f7f40>
# Let's convert the input image to grayscale
gs_image = image.convert(mode='L')
# Let's display the grayscale image using matplotlib
plt.imshow(np.asarray(gs_image), cmap='gray')
<matplotlib.image.AxesImage at 0x7c63945eb3d0>
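Converting to mode `'L'` collapses the three color channels to a single brightness channel. Pillow uses the ITU-R 601-2 luma transform for this, which weights green most heavily because the eye is most sensitive to it. A small sketch of that formula (the helper name `luma` is ours):

```python
def luma(r, g, b):
    """Greyscale value Pillow computes for one RGB pixel in mode 'L' (before rounding):
    L = R * 299/1000 + G * 587/1000 + B * 114/1000 (ITU-R 601-2)."""
    return r * 299 / 1000 + g * 587 / 1000 + b * 114 / 1000

print(luma(255, 255, 255))     # 255.0  (white stays white)
print(round(luma(255, 0, 0)))  # 76     (pure red maps to a dark grey)
```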
# Let's resize the image to 200x200 pixels
img_resized = image.resize((200,200))
# Let's print the size of the resized image to verify the new dimensions
print(img_resized.size)
# Let's display the resized image using matplotlib
plt.imshow(np.asarray(img_resized))
(200, 200)
<matplotlib.image.AxesImage at 0x7c639466fc10>
# Let's loop through batches of images from the train generator
for my_batch in train_generator:
    images = my_batch[0]
    labels = my_batch[1]
    # Let's iterate over each image and its corresponding label in the batch
    for i in range(len(labels)):
        plt.imshow(images[i])
        plt.colorbar()
        plt.show()
        # Let's print the label associated with the image
        print(labels[i])
    break
1.0
1.0
1.0
1.0
# Let's loop through batches of images from the validation generator
for my_batch in valid_generator:
    images = my_batch[0]
    labels = my_batch[1]
    # Let's iterate over each image and its corresponding label in the batch
    for i in range(len(labels)):
        plt.imshow(images[i])
        plt.colorbar()
        plt.show()
        # Let's print the label associated with the image
        print(labels[i])
    break
0.0
0.0
0.0
0.0
# Let's loop through batches of images from the test generator
for my_batch in test_generator:
    images = my_batch[0]
    labels = my_batch[1]
    # Let's iterate over each image and its corresponding label in the batch
    for i in range(len(labels)):
        plt.imshow(images[i])
        plt.colorbar()
        plt.show()
        # Let's print the label associated with the image
        print(labels[i])
    break
0.0
0.0
1.0
1.0
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Adding a couple more Conv2D and MaxPooling2D layers
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                                Output Shape              Param #
=================================================================
 input_1 (InputLayer)                        [(None, 150, 150, 3)]     0
 conv2d (Conv2D)                             (None, 150, 150, 32)      896
 max_pooling2d (MaxPooling2D)                (None, 75, 75, 32)        0
 conv2d_1 (Conv2D)                           (None, 75, 75, 64)        18496
 max_pooling2d_1 (MaxPooling2D)              (None, 37, 37, 64)        0
 conv2d_2 (Conv2D)                           (None, 37, 37, 128)       73856
 max_pooling2d_2 (MaxPooling2D)              (None, 18, 18, 128)       0
 conv2d_3 (Conv2D)                           (None, 18, 18, 128)       147584
 max_pooling2d_3 (MaxPooling2D)              (None, 9, 9, 128)         0
 conv2d_4 (Conv2D)                           (None, 9, 9, 256)         295168
 max_pooling2d_4 (MaxPooling2D)              (None, 4, 4, 256)         0
 global_average_pooling2d (GlobalAveragePooling2D)  (None, 256)        0
 dropout (Dropout)                           (None, 256)               0
 dense (Dense)                               (None, 1)                 257
=================================================================
Total params: 536257 (2.05 MB)
Trainable params: 536257 (2.05 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
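The parameter counts in the summary can be checked by hand: a Conv2D layer with k×k kernels, c_in input channels, and f filters has (k·k·c_in + 1)·f trainable parameters, the +1 being each filter's bias. A quick sketch (the helper name `conv2d_params` is ours, not a Keras API):

```python
def conv2d_params(kernel, in_channels, filters):
    """Trainable parameters of a Conv2D layer: one kernel of size
    kernel*kernel*in_channels plus one bias per filter."""
    return (kernel * kernel * in_channels + 1) * filters

print(conv2d_params(3, 3, 32))     # 896    -> matches conv2d in the summary
print(conv2d_params(3, 32, 64))    # 18496  -> matches conv2d_1
print(conv2d_params(3, 128, 256))  # 295168 -> matches conv2d_4
```

The final Dense layer contributes (256 + 1) · 1 = 257 parameters, completing the 536,257 total.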
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for model checkpointing and early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate the model on the training and validation data
# (the generators already define the batch size, so batch_size is not passed to fit)
history = model.fit(train_generator, validation_data=valid_generator, epochs=10, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/10
150/150 [==============================] - 199s 1s/step - loss: 0.6202 - accuracy: 0.6500 - precision: 0.7320 - val_loss: 0.4253 - val_accuracy: 0.8500 - val_precision: 0.7734
Epoch 2/10
150/150 [==============================] - 5s 32ms/step - loss: 0.5194 - accuracy: 0.7867 - precision: 0.7350 - val_loss: 0.2858 - val_accuracy: 0.9350 - val_precision: 0.8991
Epoch 3/10
150/150 [==============================] - 4s 26ms/step - loss: 0.4284 - accuracy: 0.8450 - precision: 0.8090 - val_loss: 0.3491 - val_accuracy: 0.8600 - val_precision: 0.7812
Epoch 4/10
150/150 [==============================] - 4s 25ms/step - loss: 0.3633 - accuracy: 0.8717 - precision: 0.8431 - val_loss: 0.3017 - val_accuracy: 0.8750 - val_precision: 0.8000
Epoch 5/10
150/150 [==============================] - 5s 34ms/step - loss: 0.3063 - accuracy: 0.8950 - precision: 0.8762 - val_loss: 0.1791 - val_accuracy: 0.9350 - val_precision: 0.8991
Epoch 6/10
150/150 [==============================] - 5s 33ms/step - loss: 0.2899 - accuracy: 0.8933 - precision: 0.8856 - val_loss: 0.1402 - val_accuracy: 0.9550 - val_precision: 0.9333
Epoch 7/10
150/150 [==============================] - 4s 25ms/step - loss: 0.3007 - accuracy: 0.8850 - precision: 0.8714 - val_loss: 0.1609 - val_accuracy: 0.9450 - val_precision: 0.9083
Epoch 8/10
150/150 [==============================] - 4s 26ms/step - loss: 0.2741 - accuracy: 0.8933 - precision: 0.8758 - val_loss: 0.1597 - val_accuracy: 0.9400 - val_precision: 0.9490
Epoch 9/10
150/150 [==============================] - 4s 26ms/step - loss: 0.2322 - accuracy: 0.9117 - precision: 0.9049 - val_loss: 0.2915 - val_accuracy: 0.8750 - val_precision: 0.8000
Epoch 10/10
150/150 [==============================] - 5s 33ms/step - loss: 0.2330 - accuracy: 0.9150 - precision: 0.9055 - val_loss: 0.1091 - val_accuracy: 0.9650 - val_precision: 0.9346
train_accuracy = history.history["accuracy"]
train_loss = history.history["loss"]
train_precision = history.history["precision"]
val_accuracy = history.history["val_accuracy"]
val_loss = history.history["val_loss"]
val_precision = history.history["val_precision"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 70s 1s/step - loss: 0.1488 - accuracy: 0.9450 - precision: 0.9083
[0.14878995716571808, 0.9449999928474426, 0.9082568883895874]
The initial model architecture reached a training accuracy of 0.9150 and a validation accuracy of 0.9650 after 10 epochs of training. The next step is to increase the number of epochs to 30 and observe how the training and validation accuracies evolve. This extended training will show whether the model benefits from more training time or begins to overfit the training data, and the trend in validation accuracy will indicate how well the model generalizes to unseen data. Depending on the outcome, additional measures such as earlier stopping or adjusting the learning rate may be considered.
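The patience-based early stopping used throughout this notebook (halt when `val_loss` has not improved for `patience` consecutive epochs) can be sketched in plain Python, independent of Keras. The helper name `early_stop_epoch` is ours, for illustration only:

```python
def early_stop_epoch(val_losses, patience):
    """Return the 1-based epoch after which patience-based early stopping
    would halt, or None if training runs to completion. Mirrors the logic
    of keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:       # improvement: remember it and reset the counter
            best = loss
            wait = 0
        else:                 # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return None

# With patience=2, training stops after val_loss fails to improve twice in a row:
print(early_stop_epoch([0.5, 0.4, 0.45, 0.42, 0.41], patience=2))  # 4
```

With `patience=30` and only 10 or 30 training epochs, the callback in this notebook effectively never fires; it is a safety net rather than the stopping criterion.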
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Adding a couple more Conv2D and MaxPooling2D layers
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_increase_epochs = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_increase_epochs.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for model checkpointing and early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="base_model_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate the model on the training and validation data
# (the generators already define the batch size, so batch_size is not passed to fit)
history_increase_epochs = model_increase_epochs.fit(train_generator, validation_data=valid_generator, epochs=30, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/30
150/150 [==============================] - 7s 34ms/step - loss: 0.6681 - accuracy: 0.6400 - precision_1: 0.7283 - val_loss: 0.5018 - val_accuracy: 0.7050 - val_precision_1: 0.9767
Epoch 2/30
150/150 [==============================] - 5s 32ms/step - loss: 0.5604 - accuracy: 0.7417 - precision_1: 0.6986 - val_loss: 0.3345 - val_accuracy: 0.9050 - val_precision_1: 0.8462
Epoch 3/30
150/150 [==============================] - 5s 32ms/step - loss: 0.4619 - accuracy: 0.8300 - precision_1: 0.7878 - val_loss: 0.2333 - val_accuracy: 0.9350 - val_precision_1: 0.9485
Epoch 4/30
150/150 [==============================] - 5s 33ms/step - loss: 0.3606 - accuracy: 0.8667 - precision_1: 0.8481 - val_loss: 0.1744 - val_accuracy: 0.9450 - val_precision_1: 0.9083
Epoch 5/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3189 - accuracy: 0.8867 - precision_1: 0.8648 - val_loss: 0.3221 - val_accuracy: 0.8650 - val_precision_1: 0.7874
Epoch 6/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2795 - accuracy: 0.8900 - precision_1: 0.8750 - val_loss: 0.2305 - val_accuracy: 0.9100 - val_precision_1: 0.8534
Epoch 7/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2659 - accuracy: 0.8983 - precision_1: 0.8893 - val_loss: 0.2251 - val_accuracy: 0.9100 - val_precision_1: 0.8475
Epoch 8/30
150/150 [==============================] - 5s 34ms/step - loss: 0.2727 - accuracy: 0.9000 - precision_1: 0.8774 - val_loss: 0.1218 - val_accuracy: 0.9700 - val_precision_1: 0.9434
Epoch 9/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2570 - accuracy: 0.9000 - precision_1: 0.8822 - val_loss: 0.1621 - val_accuracy: 0.9400 - val_precision_1: 0.8929
Epoch 10/30
150/150 [==============================] - 5s 32ms/step - loss: 0.2214 - accuracy: 0.9183 - precision_1: 0.9115 - val_loss: 0.1031 - val_accuracy: 0.9800 - val_precision_1: 0.9615
Epoch 11/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2301 - accuracy: 0.9267 - precision_1: 0.9051 - val_loss: 0.1854 - val_accuracy: 0.9300 - val_precision_1: 0.8772
Epoch 12/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1989 - accuracy: 0.9167 - precision_1: 0.9139 - val_loss: 0.1405 - val_accuracy: 0.9500 - val_precision_1: 0.9091
Epoch 13/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1891 - accuracy: 0.9300 - precision_1: 0.9272 - val_loss: 0.1747 - val_accuracy: 0.9450 - val_precision_1: 0.9009
Epoch 14/30
150/150 [==============================] - 5s 32ms/step - loss: 0.1630 - accuracy: 0.9433 - precision_1: 0.9375 - val_loss: 0.0907 - val_accuracy: 0.9800 - val_precision_1: 0.9615
Epoch 15/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1811 - accuracy: 0.9367 - precision_1: 0.9253 - val_loss: 0.1499 - val_accuracy: 0.9450 - val_precision_1: 0.9684
Epoch 16/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2170 - accuracy: 0.9217 - precision_1: 0.9068 - val_loss: 0.1063 - val_accuracy: 0.9800 - val_precision_1: 0.9706
Epoch 17/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1379 - accuracy: 0.9467 - precision_1: 0.9437 - val_loss: 0.1222 - val_accuracy: 0.9550 - val_precision_1: 0.9174
Epoch 18/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1645 - accuracy: 0.9500 - precision_1: 0.9470 - val_loss: 0.1143 - val_accuracy: 0.9750 - val_precision_1: 0.9524
Epoch 19/30
150/150 [==============================] - 5s 33ms/step - loss: 0.1494 - accuracy: 0.9550 - precision_1: 0.9505 - val_loss: 0.0843 - val_accuracy: 0.9700 - val_precision_1: 0.9434
Epoch 20/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1190 - accuracy: 0.9567 - precision_1: 0.9567 - val_loss: 0.1034 - val_accuracy: 0.9650 - val_precision_1: 0.9346
Epoch 21/30
150/150 [==============================] - 5s 33ms/step - loss: 0.1254 - accuracy: 0.9517 - precision_1: 0.9502 - val_loss: 0.0670 - val_accuracy: 0.9800 - val_precision_1: 0.9800
Epoch 22/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0907 - accuracy: 0.9700 - precision_1: 0.9669 - val_loss: 0.0982 - val_accuracy: 0.9600 - val_precision_1: 0.9259
Epoch 23/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1091 - accuracy: 0.9600 - precision_1: 0.9539 - val_loss: 0.1310 - val_accuracy: 0.9500 - val_precision_1: 0.9091
Epoch 24/30
150/150 [==============================] - 4s 26ms/step - loss: 0.0839 - accuracy: 0.9750 - precision_1: 0.9766 - val_loss: 0.1317 - val_accuracy: 0.9500 - val_precision_1: 0.9167
Epoch 25/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1083 - accuracy: 0.9633 - precision_1: 0.9572 - val_loss: 0.1508 - val_accuracy: 0.9500 - val_precision_1: 0.9091
Epoch 26/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1124 - accuracy: 0.9650 - precision_1: 0.9635 - val_loss: 0.1365 - val_accuracy: 0.9450 - val_precision_1: 0.9009
Epoch 27/30
150/150 [==============================] - 4s 26ms/step - loss: 0.0669 - accuracy: 0.9850 - precision_1: 0.9770 - val_loss: 0.1400 - val_accuracy: 0.9650 - val_precision_1: 0.9346
Epoch 28/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0420 - accuracy: 0.9917 - precision_1: 0.9900 - val_loss: 0.1045 - val_accuracy: 0.9700 - val_precision_1: 0.9434
Epoch 29/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0463 - accuracy: 0.9900 - precision_1: 0.9836 - val_loss: 0.0764 - val_accuracy: 0.9700 - val_precision_1: 0.9608
Epoch 30/30
150/150 [==============================] - 4s 26ms/step - loss: 0.0541 - accuracy: 0.9833 - precision_1: 0.9770 - val_loss: 0.2213 - val_accuracy: 0.9450 - val_precision_1: 0.9238
train_accuracy = history_increase_epochs.history["accuracy"]
train_loss = history_increase_epochs.history["loss"]
train_precision = history_increase_epochs.history["precision_1"]
val_accuracy = history_increase_epochs.history["val_accuracy"]
val_loss = history_increase_epochs.history["val_loss"]
val_precision = history_increase_epochs.history["val_precision_1"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
best_model = keras.models.load_model("base_model_checkpoint_filepath")
best_model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.1348 - accuracy: 0.9650 - precision_1: 0.9429
[0.13483797013759613, 0.9649999737739563, 0.9428571462631226]
Increasing the number of epochs from 10 to 30 raised the final training accuracy from 0.9150 to 0.9833, and the best validation accuracy improved from 0.9650 to 0.9800. As expected, total training time grew with the additional epochs, from roughly 3 minutes to 5 minutes in this run. This indicates that the model benefited from the extra training, as evidenced by the improvements in both training and validation accuracy.
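Because `ModelCheckpoint(save_best_only=True, monitor="val_loss")` keeps the weights from the epoch with the lowest validation loss, the evaluated model is not necessarily the last epoch's. Selecting that epoch from a history dict can be sketched as follows (the helper name `best_epoch` and the toy history are ours):

```python
def best_epoch(history):
    """1-based index of the epoch with the lowest validation loss, i.e. the
    epoch whose weights ModelCheckpoint(save_best_only=True, monitor='val_loss')
    would keep. `history` mirrors the dict layout of Keras History.history."""
    losses = history["val_loss"]
    return min(range(len(losses)), key=losses.__getitem__) + 1

toy = {"val_loss": [0.42, 0.29, 0.35, 0.18, 0.22]}
print(best_epoch(toy))  # 4
```

This is why the test-set numbers reported after `load_model` can differ from the final epoch's training log.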
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Adding a couple more Conv2D and MaxPooling2D layers
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_decrease_layers = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_decrease_layers.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for model checkpointing and early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="decrease_layers_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate the model on the training and validation data
# (the generators already define the batch size, so batch_size is not passed to fit)
history_decrease_layers = model_decrease_layers.fit(train_generator, validation_data=valid_generator, epochs=30, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/30
150/150 [==============================] - 7s 33ms/step - loss: 0.6517 - accuracy: 0.6250 - precision_2: 0.6728 - val_loss: 0.7375 - val_accuracy: 0.5100 - val_precision_2: 0.5051
Epoch 2/30
150/150 [==============================] - 5s 32ms/step - loss: 0.5341 - accuracy: 0.7783 - precision_2: 0.7434 - val_loss: 0.4411 - val_accuracy: 0.7950 - val_precision_2: 0.7092
Epoch 3/30
150/150 [==============================] - 5s 33ms/step - loss: 0.5041 - accuracy: 0.8083 - precision_2: 0.7577 - val_loss: 0.3115 - val_accuracy: 0.8850 - val_precision_2: 0.8130
Epoch 4/30
150/150 [==============================] - 5s 33ms/step - loss: 0.4163 - accuracy: 0.8483 - precision_2: 0.8083 - val_loss: 0.2468 - val_accuracy: 0.9200 - val_precision_2: 0.8750
Epoch 5/30
150/150 [==============================] - 5s 34ms/step - loss: 0.3555 - accuracy: 0.8583 - precision_2: 0.8308 - val_loss: 0.2346 - val_accuracy: 0.9400 - val_precision_2: 0.9151
Epoch 6/30
150/150 [==============================] - 5s 32ms/step - loss: 0.3436 - accuracy: 0.8700 - precision_2: 0.8405 - val_loss: 0.2314 - val_accuracy: 0.9300 - val_precision_2: 0.9388
Epoch 7/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3298 - accuracy: 0.8783 - precision_2: 0.8450 - val_loss: 0.2823 - val_accuracy: 0.8900 - val_precision_2: 0.8197
Epoch 8/30
150/150 [==============================] - 5s 32ms/step - loss: 0.3557 - accuracy: 0.8617 - precision_2: 0.8338 - val_loss: 0.2104 - val_accuracy: 0.9350 - val_precision_2: 0.8919
Epoch 9/30
150/150 [==============================] - 5s 32ms/step - loss: 0.3117 - accuracy: 0.8967 - precision_2: 0.8742 - val_loss: 0.1808 - val_accuracy: 0.9450 - val_precision_2: 0.9495
Epoch 10/30
150/150 [==============================] - 5s 32ms/step - loss: 0.3122 - accuracy: 0.8833 - precision_2: 0.8571 - val_loss: 0.1563 - val_accuracy: 0.9450 - val_precision_2: 0.9588
Epoch 11/30
150/150 [==============================] - 5s 32ms/step - loss: 0.2918 - accuracy: 0.9000 - precision_2: 0.8797 - val_loss: 0.1460 - val_accuracy: 0.9600 - val_precision_2: 0.9423
Epoch 12/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2558 - accuracy: 0.9167 - precision_2: 0.8882 - val_loss: 0.1609 - val_accuracy: 0.9500 - val_precision_2: 0.9091
Epoch 13/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2631 - accuracy: 0.9067 - precision_2: 0.8813 - val_loss: 0.1677 - val_accuracy: 0.9550 - val_precision_2: 0.9333
Epoch 14/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2219 - accuracy: 0.9067 - precision_2: 0.8861 - val_loss: 0.2368 - val_accuracy: 0.9050 - val_precision_2: 0.8403
Epoch 15/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2449 - accuracy: 0.9200 - precision_2: 0.8987 - val_loss: 0.1926 - val_accuracy: 0.9300 - val_precision_2: 0.8772
Epoch 16/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2258 - accuracy: 0.9200 - precision_2: 0.9038 - val_loss: 0.1580 - val_accuracy: 0.9450 - val_precision_2: 0.9009
Epoch 17/30
150/150 [==============================] - 5s 32ms/step - loss: 0.2009 - accuracy: 0.9317 - precision_2: 0.9218 - val_loss: 0.1254 - val_accuracy: 0.9450 - val_precision_2: 0.9009
Epoch 18/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2088 - accuracy: 0.9333 - precision_2: 0.9194 - val_loss: 0.1299 - val_accuracy: 0.9550 - val_precision_2: 0.9174
Epoch 19/30
150/150 [==============================] - 5s 34ms/step - loss: 0.1619 - accuracy: 0.9433 - precision_2: 0.9318 - val_loss: 0.1245 - val_accuracy: 0.9400 - val_precision_2: 0.8929
Epoch 20/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1533 - accuracy: 0.9450 - precision_2: 0.9349 - val_loss: 0.1285 - val_accuracy: 0.9450 - val_precision_2: 0.9009
Epoch 21/30
150/150 [==============================] - 5s 32ms/step - loss: 0.1458 - accuracy: 0.9450 - precision_2: 0.9320 - val_loss: 0.0757 - val_accuracy: 0.9800 - val_precision_2: 0.9706
Epoch 22/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1408 - accuracy: 0.9533 - precision_2: 0.9387 - val_loss: 0.1408 - val_accuracy: 0.9400 - val_precision_2: 0.9783
Epoch 23/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1162 - accuracy: 0.9633 - precision_2: 0.9572 - val_loss: 0.2011 - val_accuracy: 0.9400 - val_precision_2: 0.9490
Epoch 24/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1387 - accuracy: 0.9433 - precision_2: 0.9263 - val_loss: 0.1752 - val_accuracy: 0.9250 - val_precision_2: 0.8696
Epoch 25/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1061 - accuracy: 0.9633 - precision_2: 0.9542 - val_loss: 0.1848 - val_accuracy: 0.9200 - val_precision_2: 0.8621
Epoch 26/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1411 - accuracy: 0.9367 - precision_2: 0.9309 - val_loss: 0.3246 - val_accuracy: 0.8750 - val_precision_2: 0.8000
Epoch 27/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1011 - accuracy: 0.9600 - precision_2: 0.9481 - val_loss: 0.1695 - val_accuracy: 0.9300 - val_precision_2: 0.8772
Epoch 28/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0907 - accuracy: 0.9700 - precision_2: 0.9608 - val_loss: 0.0899 - val_accuracy: 0.9550 - val_precision_2: 0.9174
Epoch 29/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1106 - accuracy: 0.9633 - precision_2: 0.9542 - val_loss: 0.1722 - val_accuracy: 0.9150 - val_precision_2: 0.8547
Epoch 30/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0912 - accuracy: 0.9650 - precision_2: 0.9574 - val_loss: 0.0961 - val_accuracy: 0.9650 - val_precision_2: 0.9515
train_accuracy = history_decrease_layers.history["accuracy"]
train_loss = history_decrease_layers.history["loss"]
train_precision = history_decrease_layers.history["precision_2"]
val_accuracy = history_decrease_layers.history["val_accuracy"]
val_loss = history_decrease_layers.history["val_loss"]
val_precision = history_decrease_layers.history["val_precision_2"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("decrease_layers_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.1183 - accuracy: 0.9600 - precision_2: 0.9340
[0.11833204329013824, 0.9599999785423279, 0.9339622855186462]
Surprisingly, removing one Conv2D + MaxPooling2D block resulted in a final training accuracy of 0.9650 and a validation accuracy of 0.9650. Training and validation for 30 epochs took almost 3 minutes. Test performance is similar to the deeper configuration: 0.9600 accuracy with fewer layers versus 0.9650 with more layers.
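One way to see what the extra block changes is to track the spatial size of the feature maps: each 'same'-padded convolution preserves the size, and each 2×2 max-pooling halves it (flooring odd sizes), so depth mainly determines how small the map is before global average pooling. A sketch of that arithmetic (the helper name `feature_map_size` is ours):

```python
def feature_map_size(input_size, num_pool_layers):
    """Spatial size after repeated 'same'-padded Conv2D + 2x2 MaxPooling2D
    blocks; integer division floors odd sizes, matching the model summaries."""
    size = input_size
    for _ in range(num_pool_layers):
        size //= 2
    return size

print(feature_map_size(150, 5))  # 4 -> the five-block model pools 150x150 down to 4x4
print(feature_map_size(150, 4))  # 9 -> the four-block model averages over a 9x9 map
```

Both sizes are small enough for GlobalAveragePooling2D to summarize effectively, which may help explain why the shallower model performs comparably on this dataset.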
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Adding a couple more Conv2D and MaxPooling2D layers
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_increased_filters = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_increased_filters.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for model checkpointing and early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="increase_filters_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate the model on the training and validation data
# (the generators already yield batches, so the batch_size argument has no effect here)
history_increased_filters = model_increased_filters.fit(train_generator, validation_data = valid_generator, epochs = 30, verbose = 1, batch_size = 8, callbacks = [cb_check, cb_early])
Epoch 1/30
150/150 [==============================] - 7s 35ms/step - loss: 0.6864 - accuracy: 0.5667 - precision_5: 0.5758 - val_loss: 0.5555 - val_accuracy: 0.5300 - val_precision_5: 1.0000
Epoch 2/30
150/150 [==============================] - 5s 34ms/step - loss: 0.5739 - accuracy: 0.7783 - precision_5: 0.7434 - val_loss: 0.3989 - val_accuracy: 0.9200 - val_precision_5: 0.8750
Epoch 3/30
150/150 [==============================] - 4s 25ms/step - loss: 0.4320 - accuracy: 0.8250 - precision_5: 0.7893 - val_loss: 0.4026 - val_accuracy: 0.8250 - val_precision_5: 0.7407
Epoch 4/30
150/150 [==============================] - 5s 32ms/step - loss: 0.3670 - accuracy: 0.8533 - precision_5: 0.8272 - val_loss: 0.2222 - val_accuracy: 0.9400 - val_precision_5: 0.9074
Epoch 5/30
150/150 [==============================] - 5s 33ms/step - loss: 0.3517 - accuracy: 0.8600 - precision_5: 0.8418 - val_loss: 0.1955 - val_accuracy: 0.9450 - val_precision_5: 0.9238
Epoch 6/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2847 - accuracy: 0.9017 - precision_5: 0.8925 - val_loss: 0.2342 - val_accuracy: 0.9100 - val_precision_5: 0.8475
Epoch 7/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3516 - accuracy: 0.8567 - precision_5: 0.8223 - val_loss: 0.3013 - val_accuracy: 0.8700 - val_precision_5: 0.7937
Epoch 8/30
150/150 [==============================] - 5s 35ms/step - loss: 0.2920 - accuracy: 0.8917 - precision_5: 0.8730 - val_loss: 0.1861 - val_accuracy: 0.9400 - val_precision_5: 0.9000
Epoch 9/30
150/150 [==============================] - 5s 33ms/step - loss: 0.2530 - accuracy: 0.9083 - precision_5: 0.8939 - val_loss: 0.1829 - val_accuracy: 0.9300 - val_precision_5: 0.8772
Epoch 10/30
150/150 [==============================] - 5s 34ms/step - loss: 0.2358 - accuracy: 0.9117 - precision_5: 0.8871 - val_loss: 0.1200 - val_accuracy: 0.9750 - val_precision_5: 0.9524
Epoch 11/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2333 - accuracy: 0.9317 - precision_5: 0.9164 - val_loss: 0.1762 - val_accuracy: 0.9300 - val_precision_5: 0.9778
Epoch 12/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2209 - accuracy: 0.9250 - precision_5: 0.9048 - val_loss: 0.2080 - val_accuracy: 0.9250 - val_precision_5: 1.0000
Epoch 13/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1672 - accuracy: 0.9333 - precision_5: 0.9194 - val_loss: 0.1530 - val_accuracy: 0.9400 - val_precision_5: 0.9000
Epoch 14/30
150/150 [==============================] - 5s 33ms/step - loss: 0.1688 - accuracy: 0.9367 - precision_5: 0.9226 - val_loss: 0.0978 - val_accuracy: 0.9750 - val_precision_5: 0.9524
Epoch 15/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1536 - accuracy: 0.9433 - precision_5: 0.9290 - val_loss: 0.1072 - val_accuracy: 0.9650 - val_precision_5: 0.9429
Epoch 16/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1400 - accuracy: 0.9550 - precision_5: 0.9505 - val_loss: 0.1154 - val_accuracy: 0.9550 - val_precision_5: 0.9252
Epoch 17/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1864 - accuracy: 0.9400 - precision_5: 0.9342 - val_loss: 0.1474 - val_accuracy: 0.9500 - val_precision_5: 0.9091
Epoch 18/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1411 - accuracy: 0.9517 - precision_5: 0.9385 - val_loss: 0.1903 - val_accuracy: 0.9450 - val_precision_5: 1.0000
Epoch 19/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1708 - accuracy: 0.9517 - precision_5: 0.9414 - val_loss: 0.2120 - val_accuracy: 0.9100 - val_precision_5: 0.8534
Epoch 20/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1163 - accuracy: 0.9600 - precision_5: 0.9510 - val_loss: 0.2064 - val_accuracy: 0.9300 - val_precision_5: 0.8772
Epoch 21/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1155 - accuracy: 0.9700 - precision_5: 0.9608 - val_loss: 0.3247 - val_accuracy: 0.8700 - val_precision_5: 0.7937
Epoch 22/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1435 - accuracy: 0.9567 - precision_5: 0.9507 - val_loss: 0.1269 - val_accuracy: 0.9600 - val_precision_5: 0.9259
Epoch 23/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0887 - accuracy: 0.9767 - precision_5: 0.9673 - val_loss: 0.1428 - val_accuracy: 0.9500 - val_precision_5: 0.9091
Epoch 24/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0797 - accuracy: 0.9733 - precision_5: 0.9671 - val_loss: 0.2264 - val_accuracy: 0.9350 - val_precision_5: 0.8850
Epoch 25/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1477 - accuracy: 0.9533 - precision_5: 0.9416 - val_loss: 0.1820 - val_accuracy: 0.9150 - val_precision_5: 0.8609
Epoch 26/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1164 - accuracy: 0.9600 - precision_5: 0.9481 - val_loss: 0.1921 - val_accuracy: 0.9200 - val_precision_5: 0.8621
Epoch 27/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0966 - accuracy: 0.9700 - precision_5: 0.9608 - val_loss: 0.1543 - val_accuracy: 0.9500 - val_precision_5: 0.9167
Epoch 28/30
150/150 [==============================] - 5s 33ms/step - loss: 0.0981 - accuracy: 0.9667 - precision_5: 0.9636 - val_loss: 0.0712 - val_accuracy: 0.9600 - val_precision_5: 0.9340
Epoch 29/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0729 - accuracy: 0.9817 - precision_5: 0.9769 - val_loss: 0.0907 - val_accuracy: 0.9750 - val_precision_5: 0.9524
Epoch 30/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0577 - accuracy: 0.9817 - precision_5: 0.9738 - val_loss: 0.1185 - val_accuracy: 0.9550 - val_precision_5: 0.9505
train_accuracy = history_increased_filters.history["accuracy"]
train_loss = history_increased_filters.history["loss"]
train_precision = history_increased_filters.history["precision_5"]
val_accuracy = history_increased_filters.history["val_accuracy"]
val_loss = history_increased_filters.history["val_loss"]
val_precision = history_increased_filters.history["val_precision_5"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
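Since `ModelCheckpoint(save_best_only=True)` keeps the weights from the epoch with the lowest validation loss, the epoch being evaluated below can be recovered from the history object. A short sketch (the `val_loss` list here holds illustrative values; in the notebook it would come from `history_increased_filters.history["val_loss"]`):

```python
# Find the 1-based epoch index with the lowest validation loss --
# the epoch whose weights ModelCheckpoint(save_best_only=True) kept.
val_loss = [0.5555, 0.3989, 0.2222, 0.1861, 0.0978, 0.0712, 0.1185]  # illustrative
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__) + 1
print(best_epoch, val_loss[best_epoch - 1])  # -> 6 0.0712
```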
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("increase_filters_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.2045 - accuracy: 0.9400 - precision_5: 0.9000
[0.20448975265026093, 0.9399999976158142, 0.8999999761581421]
Surprisingly, increasing the number of convolution filters from 128 to 256 in two convolution layers resulted in a training accuracy of 0.98 and a validation accuracy of 0.9550. Training and validation for 30 epochs took close to 3 minutes. The test performance is slightly worse than the base configuration: accuracy is 0.9400 with the increased filters versus 0.9650 for the base configuration.
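The jump from 128 to 256 filters is costly in parameters: a Conv2D layer has (kernel_h * kernel_w * in_channels + 1) * out_channels trainable weights. A small sketch of that arithmetic (plain Python; the channel counts match the two configurations compared above):

```python
def conv2d_params(kernel: int, in_ch: int, out_ch: int) -> int:
    """Trainable parameters of a Conv2D layer (weights + one bias per filter)."""
    return (kernel * kernel * in_ch + 1) * out_ch

# Base configuration: a 3x3 conv with 128 -> 128 filters
print(conv2d_params(3, 128, 128))  # -> 147584
# Increased configuration: 256 -> 256 filters, roughly 4x the parameters
print(conv2d_params(3, 256, 256))  # -> 590080
```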
# Define the input shape and number of classes
input_shape = (100, 100, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# One more Conv2D and MaxPooling2D block
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_decreased_image_size = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_decreased_image_size.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
filepath="decrease_image_checkpoint_filepath",
save_best_only=True,
monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
monitor="val_loss",
patience=30
)
# Let's train and validate the model on the training and validation data
# (the data generators must be re-created with target_size=(100, 100) to match the smaller input shape)
history_decreased_image_size = model_decreased_image_size.fit(train_generator, validation_data = valid_generator, epochs = 30, verbose = 1, batch_size = 8, callbacks = [cb_check, cb_early])
Epoch 1/30
150/150 [==============================] - 7s 34ms/step - loss: 0.6990 - accuracy: 0.5267 - precision_6: 0.5229 - val_loss: 0.6786 - val_accuracy: 0.6350 - val_precision_6: 0.9655
Epoch 2/30
150/150 [==============================] - 5s 33ms/step - loss: 0.6365 - accuracy: 0.6950 - precision_6: 0.6746 - val_loss: 0.4814 - val_accuracy: 0.8950 - val_precision_6: 0.8264
Epoch 3/30
150/150 [==============================] - 5s 33ms/step - loss: 0.5423 - accuracy: 0.7833 - precision_6: 0.7273 - val_loss: 0.3751 - val_accuracy: 0.8900 - val_precision_6: 0.8197
Epoch 4/30
150/150 [==============================] - 5s 33ms/step - loss: 0.4209 - accuracy: 0.8333 - precision_6: 0.7959 - val_loss: 0.1950 - val_accuracy: 0.9450 - val_precision_6: 0.9238
Epoch 5/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3331 - accuracy: 0.8650 - precision_6: 0.8349 - val_loss: 0.2097 - val_accuracy: 0.9400 - val_precision_6: 0.9583
Epoch 6/30
150/150 [==============================] - 4s 25ms/step - loss: 0.3599 - accuracy: 0.8600 - precision_6: 0.8333 - val_loss: 0.2302 - val_accuracy: 0.9250 - val_precision_6: 0.8696
Epoch 7/30
150/150 [==============================] - 5s 33ms/step - loss: 0.3221 - accuracy: 0.8850 - precision_6: 0.8554 - val_loss: 0.1674 - val_accuracy: 0.9450 - val_precision_6: 0.9083
Epoch 8/30
150/150 [==============================] - 5s 35ms/step - loss: 0.2818 - accuracy: 0.9050 - precision_6: 0.8907 - val_loss: 0.1438 - val_accuracy: 0.9650 - val_precision_6: 0.9346
Epoch 9/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2689 - accuracy: 0.9133 - precision_6: 0.9052 - val_loss: 0.2027 - val_accuracy: 0.9150 - val_precision_6: 0.8547
Epoch 10/30
150/150 [==============================] - 5s 33ms/step - loss: 0.2342 - accuracy: 0.9250 - precision_6: 0.9153 - val_loss: 0.1269 - val_accuracy: 0.9600 - val_precision_6: 0.9259
Epoch 11/30
150/150 [==============================] - 5s 33ms/step - loss: 0.2197 - accuracy: 0.9183 - precision_6: 0.9115 - val_loss: 0.1042 - val_accuracy: 0.9750 - val_precision_6: 0.9524
Epoch 12/30
150/150 [==============================] - 5s 33ms/step - loss: 0.1911 - accuracy: 0.9317 - precision_6: 0.9218 - val_loss: 0.0941 - val_accuracy: 0.9700 - val_precision_6: 0.9434
Epoch 13/30
150/150 [==============================] - 5s 33ms/step - loss: 0.1821 - accuracy: 0.9333 - precision_6: 0.9221 - val_loss: 0.0890 - val_accuracy: 0.9650 - val_precision_6: 0.9697
Epoch 14/30
150/150 [==============================] - 5s 32ms/step - loss: 0.1875 - accuracy: 0.9317 - precision_6: 0.9164 - val_loss: 0.0655 - val_accuracy: 0.9900 - val_precision_6: 0.9804
Epoch 15/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1644 - accuracy: 0.9417 - precision_6: 0.9373 - val_loss: 0.0852 - val_accuracy: 0.9800 - val_precision_6: 0.9615
Epoch 16/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1683 - accuracy: 0.9483 - precision_6: 0.9353 - val_loss: 0.1528 - val_accuracy: 0.9350 - val_precision_6: 0.8850
Epoch 17/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1291 - accuracy: 0.9600 - precision_6: 0.9539 - val_loss: 0.0713 - val_accuracy: 0.9750 - val_precision_6: 0.9524
Epoch 18/30
150/150 [==============================] - 5s 35ms/step - loss: 0.1180 - accuracy: 0.9650 - precision_6: 0.9635 - val_loss: 0.0651 - val_accuracy: 0.9800 - val_precision_6: 0.9615
Epoch 19/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1227 - accuracy: 0.9567 - precision_6: 0.9536 - val_loss: 0.1022 - val_accuracy: 0.9600 - val_precision_6: 0.9259
Epoch 20/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1468 - accuracy: 0.9500 - precision_6: 0.9412 - val_loss: 0.0757 - val_accuracy: 0.9750 - val_precision_6: 0.9524
Epoch 21/30
150/150 [==============================] - 4s 26ms/step - loss: 0.0926 - accuracy: 0.9750 - precision_6: 0.9672 - val_loss: 0.1660 - val_accuracy: 0.9400 - val_precision_6: 0.8929
Epoch 22/30
150/150 [==============================] - 5s 33ms/step - loss: 0.0724 - accuracy: 0.9767 - precision_6: 0.9704 - val_loss: 0.0471 - val_accuracy: 0.9850 - val_precision_6: 0.9709
Epoch 23/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1340 - accuracy: 0.9600 - precision_6: 0.9570 - val_loss: 0.0758 - val_accuracy: 0.9700 - val_precision_6: 0.9434
Epoch 24/30
150/150 [==============================] - 4s 26ms/step - loss: 0.0853 - accuracy: 0.9733 - precision_6: 0.9702 - val_loss: 0.1155 - val_accuracy: 0.9600 - val_precision_6: 0.9259
Epoch 25/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1207 - accuracy: 0.9583 - precision_6: 0.9479 - val_loss: 0.0936 - val_accuracy: 0.9650 - val_precision_6: 0.9346
Epoch 26/30
150/150 [==============================] - 4s 25ms/step - loss: 0.0702 - accuracy: 0.9750 - precision_6: 0.9672 - val_loss: 0.0794 - val_accuracy: 0.9750 - val_precision_6: 0.9524
Epoch 27/30
150/150 [==============================] - 4s 26ms/step - loss: 0.0644 - accuracy: 0.9817 - precision_6: 0.9769 - val_loss: 0.1411 - val_accuracy: 0.9700 - val_precision_6: 0.9434
Epoch 28/30
150/150 [==============================] - 4s 27ms/step - loss: 0.0399 - accuracy: 0.9867 - precision_6: 0.9834 - val_loss: 0.1261 - val_accuracy: 0.9700 - val_precision_6: 0.9434
Epoch 29/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1526 - accuracy: 0.9500 - precision_6: 0.9441 - val_loss: 0.1484 - val_accuracy: 0.9450 - val_precision_6: 0.9890
Epoch 30/30
150/150 [==============================] - 4s 26ms/step - loss: 0.0874 - accuracy: 0.9717 - precision_6: 0.9670 - val_loss: 0.0700 - val_accuracy: 0.9700 - val_precision_6: 0.9519
train_accuracy = history_decreased_image_size.history["accuracy"]
train_loss = history_decreased_image_size.history["loss"]
train_precision = history_decreased_image_size.history["precision_6"]
val_accuracy = history_decreased_image_size.history["val_accuracy"]
val_loss = history_decreased_image_size.history["val_loss"]
val_precision = history_decreased_image_size.history["val_precision_6"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("decrease_image_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 17ms/step - loss: 0.1472 - accuracy: 0.9400 - precision_6: 0.9074
[0.14721837639808655, 0.9399999976158142, 0.9074074029922485]
Decreasing the input size from 150×150 to 100×100 (about a third smaller per side) resulted in a training accuracy of 0.9717 and a validation accuracy of 0.97. Training and validation for 30 epochs took close to 3 minutes. The test performance is slightly worse than the configuration with larger inputs: accuracy is 0.9400 with the smaller images versus 0.9650 with the larger ones.
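Note that the data generators must be re-run with `target_size=(100, 100)` for this model, since the earlier generators yield 150×150 images. The actual data reduction is also larger than the per-side shrink suggests, as a bit of plain-Python arithmetic shows:

```python
# Going from 150x150 to 100x100 keeps only (100/150)^2 of the pixels.
pixels_before = 150 * 150   # 22,500 pixels per channel
pixels_after = 100 * 100    # 10,000 pixels per channel
fraction_kept = pixels_after / pixels_before
print(round(fraction_kept, 3))  # -> 0.444, i.e. roughly 56% fewer pixels
```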
# Let's load the best performing model
best_model = keras.models.load_model("base_model_checkpoint_filepath")
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# One more Conv2D and MaxPooling2D block
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_batch_normalization = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_batch_normalization.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
filepath="batch_normalization_checkpoint_filepath",
save_best_only=True,
monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
monitor="val_loss",
patience=30
)
# Let's train and validate model on the training and validation data
history_batch_normalization = model_batch_normalization.fit(train_generator, validation_data = valid_generator, epochs = 30, verbose = 1, batch_size = 8, callbacks = [cb_check, cb_early])
Epoch 1/30
150/150 [==============================] - 9s 44ms/step - loss: 0.6761 - accuracy: 0.7333 - precision_17: 0.7244 - val_loss: 1.2668 - val_accuracy: 0.5000 - val_precision_17: 0.5000
Epoch 2/30
150/150 [==============================] - 5s 36ms/step - loss: 0.4533 - accuracy: 0.8183 - precision_17: 0.8111 - val_loss: 1.0875 - val_accuracy: 0.5000 - val_precision_17: 0.5000
Epoch 3/30
150/150 [==============================] - 5s 36ms/step - loss: 0.4041 - accuracy: 0.8517 - precision_17: 0.8414 - val_loss: 1.0772 - val_accuracy: 0.5200 - val_precision_17: 0.5102
Epoch 4/30
150/150 [==============================] - 5s 35ms/step - loss: 0.3635 - accuracy: 0.8717 - precision_17: 0.8540 - val_loss: 0.7409 - val_accuracy: 0.6150 - val_precision_17: 0.5650
Epoch 5/30
150/150 [==============================] - 6s 37ms/step - loss: 0.3276 - accuracy: 0.8800 - precision_17: 0.8701 - val_loss: 0.3036 - val_accuracy: 0.9050 - val_precision_17: 0.9355
Epoch 6/30
150/150 [==============================] - 5s 36ms/step - loss: 0.3076 - accuracy: 0.8950 - precision_17: 0.8963 - val_loss: 0.1171 - val_accuracy: 0.9600 - val_precision_17: 0.9259
Epoch 7/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2318 - accuracy: 0.9133 - precision_17: 0.9133 - val_loss: 0.1324 - val_accuracy: 0.9550 - val_precision_17: 0.9252
Epoch 8/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2639 - accuracy: 0.8917 - precision_17: 0.8904 - val_loss: 0.2576 - val_accuracy: 0.9050 - val_precision_17: 0.8403
Epoch 9/30
150/150 [==============================] - 4s 27ms/step - loss: 0.2358 - accuracy: 0.9067 - precision_17: 0.8961 - val_loss: 0.2604 - val_accuracy: 0.9050 - val_precision_17: 0.8403
Epoch 10/30
150/150 [==============================] - 4s 27ms/step - loss: 0.2344 - accuracy: 0.9117 - precision_17: 0.9076 - val_loss: 0.1925 - val_accuracy: 0.9150 - val_precision_17: 0.8547
Epoch 11/30
150/150 [==============================] - 4s 27ms/step - loss: 0.2376 - accuracy: 0.9150 - precision_17: 0.9220 - val_loss: 0.1966 - val_accuracy: 0.9500 - val_precision_17: 0.9787
Epoch 12/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1783 - accuracy: 0.9433 - precision_17: 0.9493 - val_loss: 0.1964 - val_accuracy: 0.9200 - val_precision_17: 0.8621
Epoch 13/30
150/150 [==============================] - 4s 27ms/step - loss: 0.1994 - accuracy: 0.9200 - precision_17: 0.9228 - val_loss: 0.1560 - val_accuracy: 0.9350 - val_precision_17: 0.8850
Epoch 14/30
150/150 [==============================] - 6s 42ms/step - loss: 0.1685 - accuracy: 0.9467 - precision_17: 0.9467 - val_loss: 0.1069 - val_accuracy: 0.9700 - val_precision_17: 0.9434
Epoch 15/30
150/150 [==============================] - 4s 27ms/step - loss: 0.1327 - accuracy: 0.9483 - precision_17: 0.9622 - val_loss: 0.2669 - val_accuracy: 0.8900 - val_precision_17: 0.9756
Epoch 16/30
150/150 [==============================] - 4s 28ms/step - loss: 0.1682 - accuracy: 0.9333 - precision_17: 0.9452 - val_loss: 0.3123 - val_accuracy: 0.8400 - val_precision_17: 0.7576
Epoch 17/30
150/150 [==============================] - 4s 27ms/step - loss: 0.1194 - accuracy: 0.9533 - precision_17: 0.9503 - val_loss: 0.1576 - val_accuracy: 0.9400 - val_precision_17: 0.8929
Epoch 18/30
150/150 [==============================] - 6s 38ms/step - loss: 0.1085 - accuracy: 0.9550 - precision_17: 0.9565 - val_loss: 0.1016 - val_accuracy: 0.9650 - val_precision_17: 0.9429
Epoch 19/30
150/150 [==============================] - 4s 27ms/step - loss: 0.1791 - accuracy: 0.9333 - precision_17: 0.9305 - val_loss: 0.1232 - val_accuracy: 0.9550 - val_precision_17: 0.9691
Epoch 20/30
150/150 [==============================] - 4s 27ms/step - loss: 0.1227 - accuracy: 0.9533 - precision_17: 0.9595 - val_loss: 0.1042 - val_accuracy: 0.9600 - val_precision_17: 0.9259
Epoch 21/30
150/150 [==============================] - 4s 29ms/step - loss: 0.1042 - accuracy: 0.9633 - precision_17: 0.9603 - val_loss: 0.4360 - val_accuracy: 0.8350 - val_precision_17: 0.9855
Epoch 22/30
150/150 [==============================] - 4s 27ms/step - loss: 0.0894 - accuracy: 0.9633 - precision_17: 0.9664 - val_loss: 0.1804 - val_accuracy: 0.9350 - val_precision_17: 0.8850
Epoch 23/30
150/150 [==============================] - 4s 27ms/step - loss: 0.0939 - accuracy: 0.9700 - precision_17: 0.9669 - val_loss: 0.2995 - val_accuracy: 0.8900 - val_precision_17: 0.8197
Epoch 24/30
150/150 [==============================] - 4s 27ms/step - loss: 0.1291 - accuracy: 0.9517 - precision_17: 0.9443 - val_loss: 0.1474 - val_accuracy: 0.9550 - val_precision_17: 0.9174
Epoch 25/30
150/150 [==============================] - 4s 28ms/step - loss: 0.0559 - accuracy: 0.9800 - precision_17: 0.9768 - val_loss: 0.1308 - val_accuracy: 0.9300 - val_precision_17: 0.9778
Epoch 26/30
150/150 [==============================] - 4s 27ms/step - loss: 0.1047 - accuracy: 0.9667 - precision_17: 0.9762 - val_loss: 0.1964 - val_accuracy: 0.9200 - val_precision_17: 0.8684
Epoch 27/30
150/150 [==============================] - 6s 37ms/step - loss: 0.0535 - accuracy: 0.9817 - precision_17: 0.9833 - val_loss: 0.0745 - val_accuracy: 0.9750 - val_precision_17: 0.9524
Epoch 28/30
150/150 [==============================] - 4s 27ms/step - loss: 0.0779 - accuracy: 0.9750 - precision_17: 0.9734 - val_loss: 0.3192 - val_accuracy: 0.8850 - val_precision_17: 0.8130
Epoch 29/30
150/150 [==============================] - 4s 28ms/step - loss: 0.1301 - accuracy: 0.9617 - precision_17: 0.9632 - val_loss: 0.1836 - val_accuracy: 0.9400 - val_precision_17: 0.9490
Epoch 30/30
150/150 [==============================] - 6s 38ms/step - loss: 0.0813 - accuracy: 0.9683 - precision_17: 0.9607 - val_loss: 0.0551 - val_accuracy: 0.9800 - val_precision_17: 0.9615
train_accuracy = history_batch_normalization.history["accuracy"]
train_loss = history_batch_normalization.history["loss"]
train_precision = history_batch_normalization.history["precision_17"]
val_accuracy = history_batch_normalization.history["val_accuracy"]
val_loss = history_batch_normalization.history["val_loss"]
val_precision = history_batch_normalization.history["val_precision_17"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("batch_normalization_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.0988 - accuracy: 0.9800 - precision_10: 0.9615
[0.09880392998456955, 0.9800000190734863, 0.9615384340286255]
Applying batch normalization resulted in a training accuracy of 0.9683 and a validation accuracy of 0.98. Training and validation for 30 epochs took close to 3 minutes. The test performance improved over the base configuration: accuracy is 0.98 with batch normalization (applied after three of the five convolution layers) versus 0.9650 for the base configuration.
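At its core, `BatchNormalization` standardizes each channel over the batch to zero mean and unit variance before applying a learned scale and shift. A minimal plain-Python sketch of just the normalization step (the learned gamma/beta parameters and the moving averages used at inference time are omitted):

```python
def batch_norm(xs, eps=1e-5):
    """Normalize a batch of scalars to roughly zero mean and unit variance."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    # eps guards against division by zero for near-constant batches
    return [(x - mean) / (var + eps) ** 0.5 for x in xs]

normed = batch_norm([2.0, 4.0, 6.0, 8.0])
print([round(v, 3) for v in normed])  # -> [-1.342, -0.447, 0.447, 1.342]
```

Keeping activations in this standardized range is what tends to stabilize and speed up training, which matches the faster, higher plateau seen in the curves above.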
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# One more Conv2D and MaxPooling2D block
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_batch5_normalization = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_batch5_normalization.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
filepath="batch_normalization5__checkpoint_filepath",
save_best_only=True,
monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
monitor="val_loss",
patience=30
)
# Let's train and validate model on the training and validation data
history5_batch_normalization = model_batch5_normalization.fit(train_generator, validation_data = valid_generator, epochs = 30, verbose = 1, batch_size = 8, callbacks = [cb_check, cb_early])
Epoch 1/30
150/150 [==============================] - 7s 35ms/step - loss: 0.7004 - accuracy: 0.4967 - precision_24: 0.4972 - val_loss: 0.6920 - val_accuracy: 0.7350 - val_precision_24: 0.6911
Epoch 2/30
150/150 [==============================] - 6s 39ms/step - loss: 0.6285 - accuracy: 0.7033 - precision_24: 0.7563 - val_loss: 0.6017 - val_accuracy: 0.6850 - val_precision_24: 0.6135
Epoch 3/30
150/150 [==============================] - 5s 33ms/step - loss: 0.4668 - accuracy: 0.8300 - precision_24: 0.8000 - val_loss: 0.2964 - val_accuracy: 0.9050 - val_precision_24: 0.8462
Epoch 4/30
150/150 [==============================] - 4s 26ms/step - loss: 0.4371 - accuracy: 0.8383 - precision_24: 0.7959 - val_loss: 0.5047 - val_accuracy: 0.7600 - val_precision_24: 0.6757
Epoch 5/30
150/150 [==============================] - 5s 33ms/step - loss: 0.3598 - accuracy: 0.8483 - precision_24: 0.8157 - val_loss: 0.1982 - val_accuracy: 0.9200 - val_precision_24: 0.8621
Epoch 6/30
150/150 [==============================] - 5s 32ms/step - loss: 0.3146 - accuracy: 0.8900 - precision_24: 0.8679 - val_loss: 0.1530 - val_accuracy: 0.9600 - val_precision_24: 0.9340
Epoch 7/30
150/150 [==============================] - 5s 33ms/step - loss: 0.2795 - accuracy: 0.8917 - precision_24: 0.8803 - val_loss: 0.1494 - val_accuracy: 0.9600 - val_precision_24: 0.9510
Epoch 8/30
150/150 [==============================] - 5s 33ms/step - loss: 0.2552 - accuracy: 0.9100 - precision_24: 0.9020 - val_loss: 0.1323 - val_accuracy: 0.9600 - val_precision_24: 0.9510
Epoch 9/30
150/150 [==============================] - 5s 33ms/step - loss: 0.2622 - accuracy: 0.9000 - precision_24: 0.8846 - val_loss: 0.1271 - val_accuracy: 0.9650 - val_precision_24: 0.9346
Epoch 10/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2799 - accuracy: 0.8917 - precision_24: 0.8778 - val_loss: 0.1322 - val_accuracy: 0.9750 - val_precision_24: 0.9524
Epoch 11/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2345 - accuracy: 0.9117 - precision_24: 0.8971 - val_loss: 0.1918 - val_accuracy: 0.9100 - val_precision_24: 0.8475
Epoch 12/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2348 - accuracy: 0.9217 - precision_24: 0.9068 - val_loss: 0.1314 - val_accuracy: 0.9750 - val_precision_24: 0.9798
Epoch 13/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2425 - accuracy: 0.9100 - precision_24: 0.9100 - val_loss: 0.1884 - val_accuracy: 0.9300 - val_precision_24: 0.8772
Epoch 14/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2250 - accuracy: 0.9200 - precision_24: 0.9172 - val_loss: 0.1831 - val_accuracy: 0.9150 - val_precision_24: 0.8547
Epoch 15/30
150/150 [==============================] - 5s 33ms/step - loss: 0.1884 - accuracy: 0.9283 - precision_24: 0.9132 - val_loss: 0.1149 - val_accuracy: 0.9700 - val_precision_24: 0.9434
Epoch 16/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2021 - accuracy: 0.9217 - precision_24: 0.9148 - val_loss: 0.1372 - val_accuracy: 0.9500 - val_precision_24: 0.9091
Epoch 17/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1917 - accuracy: 0.9333 - precision_24: 0.9422 - val_loss: 0.1527 - val_accuracy: 0.9350 - val_precision_24: 0.8919
Epoch 18/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3035 - accuracy: 0.9083 - precision_24: 0.8964 - val_loss: 0.2074 - val_accuracy: 0.9250 - val_precision_24: 0.8696
Epoch 19/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1823 - accuracy: 0.9300 - precision_24: 0.9188 - val_loss: 0.1164 - val_accuracy: 0.9550 - val_precision_24: 0.9333
Epoch 20/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1799 - accuracy: 0.9233 - precision_24: 0.9178 - val_loss: 0.1719 - val_accuracy: 0.9200 - val_precision_24: 0.8621
Epoch 21/30
150/150 [==============================] - 5s 33ms/step - loss: 0.1444 - accuracy: 0.9367 - precision_24: 0.9338 - val_loss: 0.1011 - val_accuracy: 0.9650 - val_precision_24: 0.9346
Epoch 22/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1513 - accuracy: 0.9467 - precision_24: 0.9467 - val_loss: 0.1129 - val_accuracy: 0.9500 - val_precision_24: 0.9091
Epoch 23/30
150/150 [==============================] - 4s 25ms/step - loss: 0.1156 - accuracy: 0.9583 - precision_24: 0.9568 - val_loss: 0.1490 - val_accuracy: 0.9300 - val_precision_24: 0.8772
Epoch 24/30
150/150 [==============================] - 6s 39ms/step - loss: 0.0976 - accuracy: 0.9633 - precision_24: 0.9572 - val_loss: 0.0761 - val_accuracy: 0.9700 - val_precision_24: 0.9519
Epoch 25/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1002 - accuracy: 0.9633 - precision_24: 0.9603 - val_loss: 0.2939 - val_accuracy: 0.9050 - val_precision_24: 0.8403
Epoch 26/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1462 - accuracy: 0.9600 - precision_24: 0.9452 - val_loss: 0.1738 - val_accuracy: 0.9250 - val_precision_24: 0.8696
Epoch 27/30
150/150 [==============================] - 4s 27ms/step - loss: 0.0892 - accuracy: 0.9717 - precision_24: 0.9701 - val_loss: 0.1396 - val_accuracy: 0.9350 - val_precision_24: 0.8850
Epoch 28/30
150/150 [==============================] - 4s 27ms/step - loss: 0.0621 - accuracy: 0.9783 - precision_24: 0.9767 - val_loss: 0.1560 - val_accuracy: 0.9300 - val_precision_24: 0.9778
Epoch 29/30
150/150 [==============================] - 4s 27ms/step - loss: 0.0973 - accuracy: 0.9600 - precision_24: 0.9539 - val_loss: 0.1085 - val_accuracy: 0.9550 - val_precision_24: 0.9417
Epoch 30/30
150/150 [==============================] - 4s 27ms/step - loss: 0.0857 - accuracy: 0.9733 - precision_24: 0.9733 - val_loss: 0.0866 - val_accuracy: 0.9700 - val_precision_24: 0.9434
train_accuracy = history5_batch_normalization.history["accuracy"]
train_loss = history5_batch_normalization.history["loss"]
train_precision = history5_batch_normalization.history["precision_24"]
val_accuracy = history5_batch_normalization.history["val_accuracy"]
val_loss = history5_batch_normalization.history["val_loss"]
val_precision = history5_batch_normalization.history["val_precision_24"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model (saved by ModelCheckpoint) and evaluate it on the test data
model = keras.models.load_model("batch_normalization5__checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 18ms/step - loss: 0.0988 - accuracy: 0.9800 - precision_10: 0.9615
[0.09880390018224716, 0.9800000190734863, 0.9615384340286255]
Applying batch normalization to 5 layers resulted in a final training accuracy of 0.9733 and a validation accuracy of 0.9700. Training and validation for 30 epochs took close to 3 minutes. The test performance improved over the base configuration: the test accuracy with batch normalization (5 layers) is 0.9800, versus 0.9650 for the base configuration.
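As a sanity check on what the BatchNormalization layers are doing, the transform can be reproduced with plain NumPy: each feature is normalized to zero mean and unit variance over the batch, then rescaled by the learnable gamma and beta (here fixed at their initial values of 1 and 0). This is an illustrative sketch, not the exact Keras internals, which also track moving statistics for inference:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-3):
    """Normalize activations over the batch axis, then rescale.

    x: array of shape (batch, features); eps matches Keras' default epsilon.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
y = batch_norm(x)
# After normalization each column has (near-)zero mean and unit variance,
# regardless of the original scale of the features.
print(y.mean(axis=0), y.std(axis=0))
```

Keeping activations in a consistent range this way is what lets the batch-normalized network train stably at the same learning rate despite its depth.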
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.Dropout(0.2)(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Add one more Conv2D + MaxPooling2D block
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_dropout = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_dropout.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
filepath="dropout_checkpoint_filepath",
save_best_only=True,
monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
monitor="val_loss",
patience=30
)
# Let's train and validate the model on the training and validation data
# Note: the batch size is set by the data generator, so no batch_size argument is passed to fit()
history_dropout = model_dropout.fit(train_generator, validation_data=valid_generator, epochs=30, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/30 150/150 [==============================] - 8s 38ms/step - loss: 0.6869 - accuracy: 0.5650 - precision_25: 0.5730 - val_loss: 0.6735 - val_accuracy: 0.5000 - val_precision_25: 0.0000e+00 Epoch 2/30 150/150 [==============================] - 5s 36ms/step - loss: 0.6080 - accuracy: 0.7200 - precision_25: 0.6844 - val_loss: 0.5015 - val_accuracy: 0.8450 - val_precision_25: 0.7674 Epoch 3/30 150/150 [==============================] - 5s 35ms/step - loss: 0.4105 - accuracy: 0.8317 - precision_25: 0.8080 - val_loss: 0.4050 - val_accuracy: 0.9050 - val_precision_25: 0.8462 Epoch 4/30 150/150 [==============================] - 6s 40ms/step - loss: 0.3477 - accuracy: 0.8767 - precision_25: 0.8622 - val_loss: 0.2336 - val_accuracy: 0.9300 - val_precision_25: 0.8839 Epoch 5/30 150/150 [==============================] - 4s 25ms/step - loss: 0.3255 - accuracy: 0.8750 - precision_25: 0.8641 - val_loss: 0.2805 - val_accuracy: 0.9450 - val_precision_25: 0.9159 Epoch 6/30 150/150 [==============================] - 4s 25ms/step - loss: 0.3495 - accuracy: 0.8800 - precision_25: 0.8497 - val_loss: 0.3525 - val_accuracy: 0.9100 - val_precision_25: 0.8475 Epoch 7/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2819 - accuracy: 0.9050 - precision_25: 0.8932 - val_loss: 0.2461 - val_accuracy: 0.9200 - val_precision_25: 0.8621 Epoch 8/30 150/150 [==============================] - 5s 34ms/step - loss: 0.2466 - accuracy: 0.9100 - precision_25: 0.9046 - val_loss: 0.2134 - val_accuracy: 0.9450 - val_precision_25: 0.9009 Epoch 9/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2098 - accuracy: 0.9133 - precision_25: 0.9161 - val_loss: 0.3279 - val_accuracy: 0.8700 - val_precision_25: 0.7937 Epoch 10/30 150/150 [==============================] - 5s 34ms/step - loss: 0.2129 - accuracy: 0.9183 - precision_25: 0.9142 - val_loss: 0.1581 - val_accuracy: 0.9750 - val_precision_25: 0.9612 Epoch 11/30 150/150 [==============================] - 5s 
35ms/step - loss: 0.1965 - accuracy: 0.9283 - precision_25: 0.9327 - val_loss: 0.1304 - val_accuracy: 0.9800 - val_precision_25: 0.9706 Epoch 12/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2004 - accuracy: 0.9233 - precision_25: 0.9233 - val_loss: 0.1393 - val_accuracy: 0.9650 - val_precision_25: 0.9346 Epoch 13/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2019 - accuracy: 0.9250 - precision_25: 0.9293 - val_loss: 0.1500 - val_accuracy: 0.9850 - val_precision_25: 0.9709 Epoch 14/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1652 - accuracy: 0.9367 - precision_25: 0.9367 - val_loss: 0.1727 - val_accuracy: 0.9400 - val_precision_25: 0.8929 Epoch 15/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1854 - accuracy: 0.9183 - precision_25: 0.9197 - val_loss: 0.1561 - val_accuracy: 0.9550 - val_precision_25: 0.9174 Epoch 16/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1747 - accuracy: 0.9333 - precision_25: 0.9362 - val_loss: 0.2127 - val_accuracy: 0.9750 - val_precision_25: 0.9703 Epoch 17/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1564 - accuracy: 0.9333 - precision_25: 0.9452 - val_loss: 0.1408 - val_accuracy: 0.9450 - val_precision_25: 0.9009 Epoch 18/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1332 - accuracy: 0.9450 - precision_25: 0.9556 - val_loss: 0.2705 - val_accuracy: 0.9050 - val_precision_25: 0.8403 Epoch 19/30 150/150 [==============================] - 5s 35ms/step - loss: 0.1242 - accuracy: 0.9467 - precision_25: 0.9558 - val_loss: 0.1186 - val_accuracy: 0.9750 - val_precision_25: 0.9524 Epoch 20/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1387 - accuracy: 0.9483 - precision_25: 0.9529 - val_loss: 0.1568 - val_accuracy: 0.9600 - val_precision_25: 0.9259 Epoch 21/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1335 - accuracy: 0.9500 - precision_25: 
0.9530 - val_loss: 0.2151 - val_accuracy: 0.9400 - val_precision_25: 0.8929 Epoch 22/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1348 - accuracy: 0.9483 - precision_25: 0.9468 - val_loss: 0.1225 - val_accuracy: 0.9600 - val_precision_25: 0.9259 Epoch 23/30 150/150 [==============================] - 5s 35ms/step - loss: 0.0997 - accuracy: 0.9617 - precision_25: 0.9663 - val_loss: 0.0740 - val_accuracy: 0.9800 - val_precision_25: 0.9615 Epoch 24/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1037 - accuracy: 0.9550 - precision_25: 0.9565 - val_loss: 0.0859 - val_accuracy: 0.9700 - val_precision_25: 0.9434 Epoch 25/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1102 - accuracy: 0.9667 - precision_25: 0.9667 - val_loss: 0.1357 - val_accuracy: 0.9600 - val_precision_25: 0.9259 Epoch 26/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1508 - accuracy: 0.9467 - precision_25: 0.9527 - val_loss: 0.1149 - val_accuracy: 0.9600 - val_precision_25: 0.9259 Epoch 27/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0884 - accuracy: 0.9700 - precision_25: 0.9732 - val_loss: 0.1285 - val_accuracy: 0.9450 - val_precision_25: 0.9009 Epoch 28/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1019 - accuracy: 0.9633 - precision_25: 0.9633 - val_loss: 0.1127 - val_accuracy: 0.9600 - val_precision_25: 0.9340 Epoch 29/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0768 - accuracy: 0.9750 - precision_25: 0.9703 - val_loss: 0.1537 - val_accuracy: 0.9450 - val_precision_25: 0.9009 Epoch 30/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0647 - accuracy: 0.9733 - precision_25: 0.9797 - val_loss: 0.1309 - val_accuracy: 0.9700 - val_precision_25: 0.9608
train_accuracy = history_dropout.history["accuracy"]
train_loss = history_dropout.history["loss"]
train_precision = history_dropout.history["precision_25"]
val_accuracy = history_dropout.history["val_accuracy"]
val_loss = history_dropout.history["val_loss"]
val_precision = history_dropout.history["val_precision_25"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model (saved by ModelCheckpoint) and evaluate it on the test data
model = keras.models.load_model("dropout_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 17ms/step - loss: 0.1304 - accuracy: 0.9600 - precision_25: 0.9259
[0.13037841022014618, 0.9599999785423279, 0.9259259104728699]
Applying a dropout rate of 0.2 resulted in a final training accuracy of 0.9733 and a validation accuracy of 0.9700. Training and validation for 30 epochs took close to 3 minutes. The test performance is comparable to the base configuration: the test accuracy with a dropout rate of 0.2 is 0.9600, versus 0.9650 for the base configuration.
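For intuition, the Dropout(0.2) layers above can be mimicked with NumPy: during training a fraction `rate` of activations is zeroed at random and the survivors are scaled by 1/(1 - rate) so the expected activation is unchanged, while at inference the layer is an identity. A minimal sketch of this "inverted dropout" scheme (Keras' actual implementation differs in details):

```python
import numpy as np

def dropout(x, rate=0.2, training=True, rng=None):
    """Inverted dropout: zero a fraction `rate` of units, rescale the rest."""
    if not training:
        return x  # dropout is a no-op at inference time
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate  # keep each unit with probability 1 - rate
    return x * mask / (1.0 - rate)

x = np.ones(1000)
y = dropout(x, rate=0.2)
# Roughly 20% of units are zeroed, while the mean activation stays near 1.0
# thanks to the 1/(1 - rate) rescaling of the surviving units.
```

Because each forward pass sees a different random subnetwork, the model cannot rely on any single filter, which is the regularizing effect measured in the run above.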
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.Dropout(0.4)(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Add one more Conv2D + MaxPooling2D block
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.Dropout(0.4)(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.4)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_dropout_increment = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_dropout_increment.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
filepath="dropout_increment_checkpoint_filepath",
save_best_only=True,
monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
monitor="val_loss",
patience=30
)
# Let's train and validate the model on the training and validation data
# Note: the batch size is set by the data generator, so no batch_size argument is passed to fit()
history_dropout_increment = model_dropout_increment.fit(train_generator, validation_data=valid_generator, epochs=30, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/30 150/150 [==============================] - 7s 37ms/step - loss: 0.6984 - accuracy: 0.4950 - precision_30: 0.4958 - val_loss: 0.6643 - val_accuracy: 0.5000 - val_precision_30: 0.0000e+00 Epoch 2/30 150/150 [==============================] - 5s 34ms/step - loss: 0.6168 - accuracy: 0.7067 - precision_30: 0.7183 - val_loss: 0.6268 - val_accuracy: 0.6650 - val_precision_30: 0.5988 Epoch 3/30 150/150 [==============================] - 5s 34ms/step - loss: 0.4709 - accuracy: 0.8267 - precision_30: 0.7917 - val_loss: 0.3333 - val_accuracy: 0.8950 - val_precision_30: 0.9438 Epoch 4/30 150/150 [==============================] - 6s 39ms/step - loss: 0.3775 - accuracy: 0.8583 - precision_30: 0.8287 - val_loss: 0.3021 - val_accuracy: 0.9400 - val_precision_30: 0.9151 Epoch 5/30 150/150 [==============================] - 5s 35ms/step - loss: 0.3226 - accuracy: 0.8800 - precision_30: 0.8608 - val_loss: 0.2720 - val_accuracy: 0.9200 - val_precision_30: 0.8621 Epoch 6/30 150/150 [==============================] - 5s 34ms/step - loss: 0.3078 - accuracy: 0.8967 - precision_30: 0.8790 - val_loss: 0.2370 - val_accuracy: 0.9450 - val_precision_30: 0.9009 Epoch 7/30 150/150 [==============================] - 4s 26ms/step - loss: 0.3342 - accuracy: 0.8617 - precision_30: 0.8423 - val_loss: 0.2962 - val_accuracy: 0.9400 - val_precision_30: 0.9000 Epoch 8/30 150/150 [==============================] - 4s 26ms/step - loss: 0.3151 - accuracy: 0.8850 - precision_30: 0.8762 - val_loss: 0.3494 - val_accuracy: 0.9000 - val_precision_30: 0.8333 Epoch 9/30 150/150 [==============================] - 5s 34ms/step - loss: 0.3075 - accuracy: 0.8900 - precision_30: 0.8611 - val_loss: 0.2345 - val_accuracy: 0.9400 - val_precision_30: 0.8929 Epoch 10/30 150/150 [==============================] - 5s 34ms/step - loss: 0.2566 - accuracy: 0.9050 - precision_30: 0.8809 - val_loss: 0.1696 - val_accuracy: 0.9700 - val_precision_30: 0.9434 Epoch 11/30 150/150 [==============================] - 4s 
25ms/step - loss: 0.2547 - accuracy: 0.9117 - precision_30: 0.8871 - val_loss: 0.2291 - val_accuracy: 0.9450 - val_precision_30: 0.9009 Epoch 12/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2390 - accuracy: 0.9050 - precision_30: 0.8932 - val_loss: 0.2285 - val_accuracy: 0.9200 - val_precision_30: 0.8621 Epoch 13/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2253 - accuracy: 0.9250 - precision_30: 0.9100 - val_loss: 0.2704 - val_accuracy: 0.9050 - val_precision_30: 0.8403 Epoch 14/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2376 - accuracy: 0.9167 - precision_30: 0.9058 - val_loss: 0.2011 - val_accuracy: 0.9500 - val_precision_30: 0.9167 Epoch 15/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1982 - accuracy: 0.9217 - precision_30: 0.9121 - val_loss: 0.1841 - val_accuracy: 0.9250 - val_precision_30: 0.8696 Epoch 16/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1906 - accuracy: 0.9317 - precision_30: 0.9191 - val_loss: 0.1702 - val_accuracy: 0.9550 - val_precision_30: 0.9174 Epoch 17/30 150/150 [==============================] - 5s 34ms/step - loss: 0.1661 - accuracy: 0.9450 - precision_30: 0.9320 - val_loss: 0.1616 - val_accuracy: 0.9550 - val_precision_30: 0.9174 Epoch 18/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2196 - accuracy: 0.9200 - precision_30: 0.9091 - val_loss: 0.1891 - val_accuracy: 0.9400 - val_precision_30: 0.8929 Epoch 19/30 150/150 [==============================] - 5s 34ms/step - loss: 0.1519 - accuracy: 0.9517 - precision_30: 0.9502 - val_loss: 0.1333 - val_accuracy: 0.9750 - val_precision_30: 0.9524 Epoch 20/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1707 - accuracy: 0.9383 - precision_30: 0.9283 - val_loss: 0.1405 - val_accuracy: 0.9500 - val_precision_30: 0.9091 Epoch 21/30 150/150 [==============================] - 5s 34ms/step - loss: 0.1308 - accuracy: 0.9533 - precision_30: 
0.9474 - val_loss: 0.1307 - val_accuracy: 0.9600 - val_precision_30: 0.9259 Epoch 22/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1521 - accuracy: 0.9317 - precision_30: 0.9302 - val_loss: 0.1738 - val_accuracy: 0.9250 - val_precision_30: 0.8696 Epoch 23/30 150/150 [==============================] - 5s 34ms/step - loss: 0.1323 - accuracy: 0.9533 - precision_30: 0.9444 - val_loss: 0.1198 - val_accuracy: 0.9700 - val_precision_30: 0.9434 Epoch 24/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1505 - accuracy: 0.9433 - precision_30: 0.9404 - val_loss: 0.1424 - val_accuracy: 0.9600 - val_precision_30: 0.9259 Epoch 25/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1148 - accuracy: 0.9533 - precision_30: 0.9444 - val_loss: 0.1652 - val_accuracy: 0.9400 - val_precision_30: 0.8929 Epoch 26/30 150/150 [==============================] - 6s 40ms/step - loss: 0.0989 - accuracy: 0.9567 - precision_30: 0.9536 - val_loss: 0.0941 - val_accuracy: 0.9750 - val_precision_30: 0.9612 Epoch 27/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1373 - accuracy: 0.9517 - precision_30: 0.9562 - val_loss: 0.1340 - val_accuracy: 0.9650 - val_precision_30: 0.9346 Epoch 28/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1002 - accuracy: 0.9633 - precision_30: 0.9572 - val_loss: 0.1697 - val_accuracy: 0.9300 - val_precision_30: 0.8772 Epoch 29/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1272 - accuracy: 0.9567 - precision_30: 0.9477 - val_loss: 0.1124 - val_accuracy: 0.9650 - val_precision_30: 0.9346 Epoch 30/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0730 - accuracy: 0.9733 - precision_30: 0.9733 - val_loss: 0.1621 - val_accuracy: 0.9450 - val_precision_30: 0.9083
train_accuracy = history_dropout_increment.history["accuracy"]
train_loss = history_dropout_increment.history["loss"]
train_precision = history_dropout_increment.history["precision_30"]
val_accuracy = history_dropout_increment.history["val_accuracy"]
val_loss = history_dropout_increment.history["val_loss"]
val_precision = history_dropout_increment.history["val_precision_30"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model (saved by ModelCheckpoint) and evaluate it on the test data
model = keras.models.load_model("dropout_increment_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.1333 - accuracy: 0.9600 - precision_30: 0.9423
[0.13325196504592896, 0.9599999785423279, 0.942307710647583]
Applying a dropout rate of 0.4 resulted in a final training accuracy of 0.9733 and a validation accuracy of 0.9450. Training and validation for 30 epochs took close to 3 minutes. The test performance is comparable to the base configuration: the test accuracy with a dropout rate of 0.4 is 0.9600, versus 0.9650 for the base configuration.
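A side note on the callbacks used in these runs: with patience=30 and only 30 training epochs, the EarlyStopping callback can never actually fire, so only the ModelCheckpoint is doing useful work here. The patience logic itself is simple; a simplified pure-Python sketch of how Keras tracks val_loss (the real callback also supports min_delta and weight restoration):

```python
def early_stop_epoch(val_losses, patience):
    """Return the 1-based epoch at which training would stop, or None.

    Training stops once `patience` consecutive epochs pass without
    improving on the best validation loss seen so far.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None  # never triggered within the given epochs

losses = [0.50, 0.40, 0.45, 0.44, 0.43]
# With patience=3, epochs 3-5 all fail to beat 0.40, so training stops at epoch 5.
print(early_stop_epoch(losses, patience=3))  # -> 5
```

Choosing a smaller patience (say 5) would have cut several of these 30-epoch runs short without losing the best checkpoint, since the checkpoint already keeps the lowest-val_loss weights.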
from keras.regularizers import l2
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu', kernel_regularizer=l2(0.01))(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Add one more Conv2D + MaxPooling2D block
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.4)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_with_l2_regularization = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
model_with_l2_regularization.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
filepath="l2_regularization_checkpoint_filepath",
save_best_only=True,
monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
monitor="val_loss",
patience=30
)
# Let's train and validate the model on the training and validation data
# Note: the batch size is set by the data generator, so no batch_size argument is passed to fit()
history_l2_regularization = model_with_l2_regularization.fit(train_generator, validation_data=valid_generator, epochs=30, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/30 150/150 [==============================] - 7s 35ms/step - loss: 0.8361 - accuracy: 0.5600 - precision_33: 0.5570 - val_loss: 0.7042 - val_accuracy: 0.5050 - val_precision_33: 0.5025 Epoch 2/30 150/150 [==============================] - 6s 38ms/step - loss: 0.6563 - accuracy: 0.6533 - precision_33: 0.6386 - val_loss: 0.5681 - val_accuracy: 0.8650 - val_precision_33: 0.7967 Epoch 3/30 150/150 [==============================] - 5s 34ms/step - loss: 0.5247 - accuracy: 0.8267 - precision_33: 0.7988 - val_loss: 0.4071 - val_accuracy: 0.8800 - val_precision_33: 0.9318 Epoch 4/30 150/150 [==============================] - 5s 33ms/step - loss: 0.4069 - accuracy: 0.8600 - precision_33: 0.8354 - val_loss: 0.2189 - val_accuracy: 0.9350 - val_precision_33: 0.8850 Epoch 5/30 150/150 [==============================] - 4s 25ms/step - loss: 0.3825 - accuracy: 0.8600 - precision_33: 0.8253 - val_loss: 0.3380 - val_accuracy: 0.8650 - val_precision_33: 0.7874 Epoch 6/30 150/150 [==============================] - 5s 33ms/step - loss: 0.3506 - accuracy: 0.8783 - precision_33: 0.8536 - val_loss: 0.1794 - val_accuracy: 0.9550 - val_precision_33: 0.9417 Epoch 7/30 150/150 [==============================] - 4s 26ms/step - loss: 0.3310 - accuracy: 0.8900 - precision_33: 0.8656 - val_loss: 0.2037 - val_accuracy: 0.9400 - val_precision_33: 0.8929 Epoch 8/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2972 - accuracy: 0.8967 - precision_33: 0.8864 - val_loss: 0.1744 - val_accuracy: 0.9500 - val_precision_33: 0.9091 Epoch 9/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2901 - accuracy: 0.9100 - precision_33: 0.8868 - val_loss: 0.1348 - val_accuracy: 0.9700 - val_precision_33: 0.9434 Epoch 10/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2980 - accuracy: 0.8933 - precision_33: 0.8711 - val_loss: 0.2067 - val_accuracy: 0.9250 - val_precision_33: 0.8696 Epoch 11/30 150/150 [==============================] - 5s 33ms/step 
- loss: 0.2677 - accuracy: 0.9083 - precision_33: 0.9016 - val_loss: 0.1298 - val_accuracy: 0.9600 - val_precision_33: 0.9340 Epoch 12/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2439 - accuracy: 0.9117 - precision_33: 0.9103 - val_loss: 0.2225 - val_accuracy: 0.9050 - val_precision_33: 0.8403 Epoch 13/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2456 - accuracy: 0.9050 - precision_33: 0.8984 - val_loss: 0.1233 - val_accuracy: 0.9750 - val_precision_33: 0.9612 Epoch 14/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2562 - accuracy: 0.9150 - precision_33: 0.9082 - val_loss: 0.1954 - val_accuracy: 0.9250 - val_precision_33: 0.8696 Epoch 15/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2496 - accuracy: 0.9117 - precision_33: 0.8971 - val_loss: 0.1271 - val_accuracy: 0.9800 - val_precision_33: 0.9800 Epoch 16/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2265 - accuracy: 0.9167 - precision_33: 0.9058 - val_loss: 0.0955 - val_accuracy: 0.9800 - val_precision_33: 0.9706 Epoch 17/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2198 - accuracy: 0.9200 - precision_33: 0.9145 - val_loss: 0.0891 - val_accuracy: 0.9900 - val_precision_33: 0.9900 Epoch 18/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2129 - accuracy: 0.9217 - precision_33: 0.9203 - val_loss: 0.1068 - val_accuracy: 0.9750 - val_precision_33: 0.9524 Epoch 19/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1961 - accuracy: 0.9283 - precision_33: 0.9356 - val_loss: 0.1737 - val_accuracy: 0.9250 - val_precision_33: 0.8696 Epoch 20/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1947 - accuracy: 0.9300 - precision_33: 0.9243 - val_loss: 0.0986 - val_accuracy: 0.9700 - val_precision_33: 0.9434 Epoch 21/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1911 - accuracy: 0.9350 - precision_33: 0.9307 - 
val_loss: 0.1515 - val_accuracy: 0.9500 - val_precision_33: 0.9091 Epoch 22/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1801 - accuracy: 0.9300 - precision_33: 0.9300 - val_loss: 0.1071 - val_accuracy: 0.9650 - val_precision_33: 0.9346 Epoch 23/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1820 - accuracy: 0.9283 - precision_33: 0.9327 - val_loss: 0.2091 - val_accuracy: 0.9150 - val_precision_33: 0.8547 Epoch 24/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2003 - accuracy: 0.9333 - precision_33: 0.9140 - val_loss: 0.1321 - val_accuracy: 0.9650 - val_precision_33: 0.9346 Epoch 25/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1740 - accuracy: 0.9417 - precision_33: 0.9402 - val_loss: 0.1257 - val_accuracy: 0.9450 - val_precision_33: 0.9009 Epoch 26/30 150/150 [==============================] - 6s 39ms/step - loss: 0.1523 - accuracy: 0.9417 - precision_33: 0.9431 - val_loss: 0.0694 - val_accuracy: 0.9850 - val_precision_33: 0.9709 Epoch 27/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1413 - accuracy: 0.9517 - precision_33: 0.9532 - val_loss: 0.0765 - val_accuracy: 0.9700 - val_precision_33: 0.9434 Epoch 28/30 150/150 [==============================] - 5s 33ms/step - loss: 0.1428 - accuracy: 0.9550 - precision_33: 0.9505 - val_loss: 0.0679 - val_accuracy: 0.9700 - val_precision_33: 0.9434 Epoch 29/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1254 - accuracy: 0.9617 - precision_33: 0.9571 - val_loss: 0.0983 - val_accuracy: 0.9700 - val_precision_33: 0.9434 Epoch 30/30 150/150 [==============================] - 4s 27ms/step - loss: 0.1272 - accuracy: 0.9617 - precision_33: 0.9601 - val_loss: 0.0777 - val_accuracy: 0.9800 - val_precision_33: 0.9615
train_accuracy = history_l2_regularization.history["accuracy"]
train_loss = history_l2_regularization.history["loss"]
train_precision = history_l2_regularization.history["precision_33"]
val_accuracy = history_l2_regularization.history["val_accuracy"]
val_loss = history_l2_regularization.history["val_loss"]
val_precision = history_l2_regularization.history["val_precision_33"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model (saved by ModelCheckpoint) and evaluate it on the test data
model = keras.models.load_model("l2_regularization_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 17ms/step - loss: 0.1719 - accuracy: 0.9500 - precision_33: 0.9091
[0.17189747095108032, 0.949999988079071, 0.9090909361839294]
Applying an L2 regularization rate of 0.01 resulted in a final training accuracy of 0.9617 and a validation accuracy of 0.9800. Training and validation for 30 epochs took close to 3 minutes. The test performance is slightly below the base configuration: the test accuracy with L2 regularization is 0.9500, versus 0.9650 for the base configuration.
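The kernel_regularizer=l2(0.01) above adds a penalty of 0.01 * sum(w**2) over that layer's kernel weights to the training loss, which is part of why the reported loss starts higher here (0.8361 in epoch 1) than in the unregularized runs. A NumPy sketch of the penalty term:

```python
import numpy as np

def l2_penalty(weights, rate=0.01):
    """L2 penalty as applied by keras.regularizers.l2: rate * sum(w**2)."""
    return rate * np.sum(np.square(weights))

w = np.array([0.5, -1.0, 2.0])
# 0.01 * (0.25 + 1.0 + 4.0) = 0.0525
penalty = l2_penalty(w)
# total training loss = binary cross-entropy + penalties from all regularized layers
```

Because the penalty grows with the squared magnitude of the weights, gradient descent is pushed toward smaller kernels, which is the shrinkage effect the run above is testing.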
from keras.regularizers import l2
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu', kernel_regularizer=l2(0.01))(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.Dropout(0.4)(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Add one more Conv2D block with Batch Normalization and MaxPooling2D
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_combined_regularization = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model using binary cross-entropy as the loss function and Adam as the optimizer
model_combined_regularization.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
filepath="combined_regularization_checkpoint_filepath",
save_best_only=True,
monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
monitor="val_loss",
patience=30
)
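Note that with `patience=30` and only 30 training epochs, this EarlyStopping callback can never actually fire; it is kept for consistency with the earlier experiments. A simplified sketch of the patience logic it implements (ignoring `min_delta` and best-weight restoration):

```python
def early_stop_epoch(val_losses, patience):
    """Return the 1-based epoch at which patience-based early stopping would halt,
    or None if training runs to completion (as happens here with patience=30)."""
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Stops at epoch 4: the best loss (0.40) goes unimproved for 2 consecutive epochs
assert early_stop_epoch([0.50, 0.40, 0.45, 0.44, 0.43], patience=2) == 4
```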
# Let's train and validate model on the training and validation data
history_combined_regularization = model_combined_regularization.fit(train_generator, validation_data = valid_generator, epochs = 30, verbose = 1, batch_size = 8, callbacks = [cb_check, cb_early])
Epoch 1/30
150/150 [==============================] - 8s 39ms/step - loss: 1.1566 - accuracy: 0.7483 - precision_37: 0.7427 - val_loss: 1.7763 - val_accuracy: 0.5000 - val_precision_37: 0.5000
Epoch 2/30
150/150 [==============================] - 5s 36ms/step - loss: 0.6717 - accuracy: 0.8033 - precision_37: 0.7935 - val_loss: 1.5272 - val_accuracy: 0.5000 - val_precision_37: 0.5000
Epoch 3/30
150/150 [==============================] - 5s 36ms/step - loss: 0.4514 - accuracy: 0.8567 - precision_37: 0.8429 - val_loss: 0.8710 - val_accuracy: 0.5250 - val_precision_37: 0.5128
Epoch 4/30
150/150 [==============================] - 4s 25ms/step - loss: 0.4557 - accuracy: 0.8267 - precision_37: 0.8025 - val_loss: 1.1297 - val_accuracy: 0.5100 - val_precision_37: 0.5051
Epoch 5/30
150/150 [==============================] - 5s 36ms/step - loss: 0.3643 - accuracy: 0.8667 - precision_37: 0.8503 - val_loss: 0.1640 - val_accuracy: 0.9500 - val_precision_37: 0.9091
Epoch 6/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3865 - accuracy: 0.8583 - precision_37: 0.8391 - val_loss: 0.4131 - val_accuracy: 0.8150 - val_precision_37: 0.7299
Epoch 7/30
150/150 [==============================] - 4s 25ms/step - loss: 0.3573 - accuracy: 0.8667 - precision_37: 0.8571 - val_loss: 0.2770 - val_accuracy: 0.8950 - val_precision_37: 0.8264
Epoch 8/30
150/150 [==============================] - 4s 25ms/step - loss: 0.3627 - accuracy: 0.8867 - precision_37: 0.8648 - val_loss: 0.2524 - val_accuracy: 0.9100 - val_precision_37: 0.8475
Epoch 9/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3490 - accuracy: 0.8650 - precision_37: 0.8308 - val_loss: 0.3145 - val_accuracy: 0.8700 - val_precision_37: 0.7937
Epoch 10/30
150/150 [==============================] - 4s 25ms/step - loss: 0.3214 - accuracy: 0.8983 - precision_37: 0.8842 - val_loss: 0.4606 - val_accuracy: 0.7850 - val_precision_37: 0.6993
Epoch 11/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3149 - accuracy: 0.8800 - precision_37: 0.8540 - val_loss: 0.2650 - val_accuracy: 0.8900 - val_precision_37: 0.8197
Epoch 12/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2816 - accuracy: 0.9017 - precision_37: 0.8825 - val_loss: 0.2117 - val_accuracy: 0.9100 - val_precision_37: 0.8475
Epoch 13/30
150/150 [==============================] - 5s 36ms/step - loss: 0.3308 - accuracy: 0.8883 - precision_37: 0.8722 - val_loss: 0.1095 - val_accuracy: 0.9900 - val_precision_37: 0.9900
Epoch 14/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2802 - accuracy: 0.9067 - precision_37: 0.8910 - val_loss: 1.2121 - val_accuracy: 0.5250 - val_precision_37: 0.5128
Epoch 15/30
150/150 [==============================] - 4s 27ms/step - loss: 0.3030 - accuracy: 0.8917 - precision_37: 0.8707 - val_loss: 0.1478 - val_accuracy: 0.9950 - val_precision_37: 0.9901
Epoch 16/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2890 - accuracy: 0.9083 - precision_37: 0.8864 - val_loss: 0.3659 - val_accuracy: 0.8500 - val_precision_37: 0.7692
Epoch 17/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2216 - accuracy: 0.9317 - precision_37: 0.9218 - val_loss: 0.1769 - val_accuracy: 0.9550 - val_precision_37: 0.9174
Epoch 18/30
150/150 [==============================] - 4s 27ms/step - loss: 0.3048 - accuracy: 0.9033 - precision_37: 0.8980 - val_loss: 0.1663 - val_accuracy: 0.9750 - val_precision_37: 0.9798
Epoch 19/30
150/150 [==============================] - 4s 26ms/step - loss: 0.3060 - accuracy: 0.8750 - precision_37: 0.8641 - val_loss: 0.6166 - val_accuracy: 0.6750 - val_precision_37: 0.6061
Epoch 20/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2697 - accuracy: 0.9233 - precision_37: 0.9071 - val_loss: 0.1570 - val_accuracy: 0.9750 - val_precision_37: 0.9524
Epoch 21/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2635 - accuracy: 0.9150 - precision_37: 0.9029 - val_loss: 0.2128 - val_accuracy: 0.9100 - val_precision_37: 0.8475
Epoch 22/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2268 - accuracy: 0.9300 - precision_37: 0.9272 - val_loss: 0.3131 - val_accuracy: 0.8600 - val_precision_37: 0.7812
Epoch 23/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2216 - accuracy: 0.9350 - precision_37: 0.9251 - val_loss: 0.1500 - val_accuracy: 0.9600 - val_precision_37: 0.9259
Epoch 24/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2576 - accuracy: 0.9233 - precision_37: 0.9233 - val_loss: 0.2220 - val_accuracy: 0.9050 - val_precision_37: 0.8403
Epoch 25/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2106 - accuracy: 0.9350 - precision_37: 0.9307 - val_loss: 0.3570 - val_accuracy: 0.8400 - val_precision_37: 0.7576
Epoch 26/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2151 - accuracy: 0.9267 - precision_37: 0.9267 - val_loss: 0.2483 - val_accuracy: 0.8850 - val_precision_37: 0.8130
Epoch 27/30
150/150 [==============================] - 4s 26ms/step - loss: 0.2194 - accuracy: 0.9233 - precision_37: 0.9150 - val_loss: 0.3389 - val_accuracy: 0.8400 - val_precision_37: 0.7576
Epoch 28/30
150/150 [==============================] - 4s 25ms/step - loss: 0.2192 - accuracy: 0.9350 - precision_37: 0.9307 - val_loss: 0.1361 - val_accuracy: 0.9650 - val_precision_37: 0.9346
Epoch 29/30
150/150 [==============================] - 6s 42ms/step - loss: 0.1765 - accuracy: 0.9483 - precision_37: 0.9410 - val_loss: 0.1049 - val_accuracy: 0.9800 - val_precision_37: 0.9615
Epoch 30/30
150/150 [==============================] - 4s 26ms/step - loss: 0.1682 - accuracy: 0.9467 - precision_37: 0.9437 - val_loss: 0.1053 - val_accuracy: 0.9750 - val_precision_37: 0.9524
train_accuracy = history_combined_regularization.history["accuracy"]
train_loss = history_combined_regularization.history["loss"]
train_precision = history_combined_regularization.history["precision_37"]
val_accuracy = history_combined_regularization.history["val_accuracy"]
val_loss = history_combined_regularization.history["val_loss"]
val_precision = history_combined_regularization.history["val_precision_37"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("combined_regularization_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.1598 - accuracy: 0.9450 - precision_37: 0.9083
[0.159788116812706, 0.9449999928474426, 0.9082568883895874]
Combining all the regularization techniques (Batch Normalization, Dropout, and L2 regularization) resulted in a training accuracy of 0.9467 and a validation accuracy of 0.9750. Training and validation for 30 epochs took approximately 3 minutes to complete. The test performance is comparable to the base configuration: the test accuracy with combined regularization is 0.945, versus 0.9650 for the base configuration.
| Regularization Technique | Configuration | Validation Accuracy | Test Accuracy |
|---|---|---|---|
| Batch Normalization | 4 layers | 0.98 | 0.98 |
| Batch Normalization | 5 layers | 0.97 | 0.98 |
| Dropout | Rate = 0.2 | 0.97 | 0.96 |
| Dropout | Rate = 0.4 | 0.965 | 0.96 |
| L2 Regularization | λ = 0.01 | 0.98 | 0.95 |
| Combined Techniques | Multiple | 0.975 | 0.945 |
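As a quick programmatic check of the comparison (the rows below are copied from the table above), picking the configuration with the best test accuracy:

```python
# (technique, validation accuracy, test accuracy) rows copied from the summary table
results = [
    ("Batch Normalization, 4 layers", 0.980, 0.980),
    ("Batch Normalization, 5 layers", 0.970, 0.980),
    ("Dropout, rate=0.2",             0.970, 0.960),
    ("Dropout, rate=0.4",             0.965, 0.960),
    ("L2 Regularization, lambda=0.01", 0.980, 0.950),
    ("Combined techniques",           0.975, 0.945),
]

# Rank by test accuracy, the metric the project is ultimately evaluated on
best = max(results, key=lambda row: row[2])
print(best[0])  # a Batch Normalization variant tops the test accuracy at 0.98
```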